The goal of this paper is to use multi-task learning to efficiently scale slot filling models for natural language understanding to handle multiple target tasks or domains. The key to scalability is reducing the amount of training data needed to learn a model for a new task. The proposed multi-task model delivers better performance with less data by leveraging patterns that it learns from the other tasks. The approach supports an open vocabulary, which allows the models to generalize to unseen words; this is particularly important when very little training data is used. A newly collected crowd-sourced data set, covering four different domains, is used to demonstrate the effectiveness of the domain adaptation and open vocabulary techniques.
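To make the multi-task idea concrete, the following is a minimal sketch (not the authors' implementation) of one common way to share knowledge across slot-filling domains: a shared encoder learns patterns reused by every task, while each domain keeps its own small tagging head, so a new domain needs less labeled data. All names, sizes, and domain labels here are illustrative assumptions.

```python
# Hypothetical sketch of a multi-task slot filler: shared encoder,
# per-domain output heads. Details (sizes, domains) are assumptions.
import torch
import torch.nn as nn

class SharedSlotFiller(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, domain_num_slots):
        super().__init__()
        # Shared parameters: reused by all domains, so each new domain
        # benefits from patterns learned on the others.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Domain-specific parameters: one slot classifier per domain.
        self.heads = nn.ModuleDict({
            domain: nn.Linear(2 * hidden_dim, n_slots)
            for domain, n_slots in domain_num_slots.items()
        })

    def forward(self, token_ids, domain):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[domain](states)  # per-token slot logits

model = SharedSlotFiller(vocab_size=10_000, embed_dim=64, hidden_dim=128,
                         domain_num_slots={"travel": 12, "weather": 7})
logits = model(torch.randint(0, 10_000, (2, 5)), domain="travel")
print(logits.shape)  # torch.Size([2, 5, 12]): batch, tokens, slot labels
```

An open-vocabulary variant would replace the word-level embedding table with a subword or character-level encoder so that unseen words still receive useful representations; the paper's specific mechanism is not detailed in this abstract.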